# Multi-level quantization adaptation

## Thedrummer Anubis 70B V1.1 GGUF
A quantized version of a 70B-parameter large language model, offered at multiple quantization levels to suit different hardware requirements.
Tags: Large Language Model · Author: bartowski · Downloads: 246 · Likes: 1
## Mrm8488 Qwen3 14B Ft Limo GGUF
License: Apache-2.0
Multiple quantized versions of the Qwen3-14B-ft-limo model, generated with the imatrix option of llama.cpp and suited to different performance and storage requirements.
Tags: Large Language Model · Author: bartowski · Downloads: 866 · Likes: 1
## Arliai QwQ 32B ArliAI RpR V4 GGUF
License: Apache-2.0
A 32B-parameter quantized large language model based on ArliAI/QwQ-32B-ArliAI-RpR-v4, quantized with llama.cpp at various precisions and suited to text generation tasks.
Tags: Large Language Model, English · Author: bartowski · Downloads: 1,721 · Likes: 1
## Tesslate Tessa T1 3B GGUF
License: Apache-2.0
Tessa-T1-3B is a 3B-parameter large language model based on the Qwen2 architecture, offered in multiple quantized versions to accommodate different hardware requirements.
Tags: Large Language Model, English · Author: bartowski · Downloads: 697 · Likes: 6
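
All of the models listed above are distributed as GGUF quantizations, so each can be loaded locally with a GGUF-compatible runtime; the point of the multiple quantization levels is to trade accuracy against memory footprint. The sketch below uses the llama-cpp-python bindings to load one such file; the filename and parameter values are illustrative assumptions, not exact artifacts from these listings, so substitute the quantization level (e.g. Q4_K_M vs. Q8_0) that fits your RAM or VRAM budget.

```python
# Minimal sketch: running a downloaded GGUF quantization with llama-cpp-python.
# The model path is a hypothetical local file, not a verified filename from the
# listings above; choose the quant level that fits your hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="./Tessa-T1-3B-Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm(
    "Explain GGUF quantization in one sentence.",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```

Lower-bit quants (Q3/Q4) run on smaller GPUs or CPU-only machines at some quality cost, while higher-bit quants (Q6/Q8) stay closer to the original weights but need proportionally more memory.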